Multimedia Semantic Integrity Assessment Using Joint Embedding Of Images And Text
Real-world multimedia data is often composed of multiple modalities, such as
an image or a video with associated text (e.g., captions or user comments)
and metadata. Such multimodal data packages are prone to manipulations, where a
subset of these modalities can be altered to misrepresent or repurpose data
packages, with possible malicious intent. It is, therefore, important to
develop methods to assess or verify the integrity of these multimedia packages.
Using computer vision and natural language processing methods to directly
compare the image (or video) and the associated caption to verify the integrity
of a media package is only possible for a limited set of objects and scenes. In
this paper, we present a novel deep learning-based approach for assessing the
semantic integrity of multimedia packages containing images and captions, using
a reference set of multimedia packages. We construct a joint embedding of
images and captions with deep multimodal representation learning on the
reference dataset in a framework that also provides image-caption consistency
scores (ICCSs). The integrity of query media packages is assessed as the
inlierness of the query ICCSs with respect to the reference dataset. We present
the MultimodAl Information Manipulation dataset (MAIM), a new dataset of media
packages from Flickr, which we make available to the research community. We use
both the newly created dataset as well as Flickr30K and MS COCO datasets to
quantitatively evaluate our proposed approach. The reference dataset does not
contain unmanipulated versions of tampered query packages. Our method is able
to achieve F1 scores of 0.75, 0.89 and 0.94 on MAIM, Flickr30K and MS COCO,
respectively, for detecting semantically incoherent media packages.

Comment: Ayush Jaiswal and Ekraam Sabir contributed equally to the work in
this paper.
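As a rough illustration of the scoring idea, the sketch below (Python; all names are hypothetical, and the learned joint embedding itself is assumed to be given rather than reproduced from the paper) computes an image-caption consistency score as cosine similarity in the joint space and assesses a query's inlierness against the reference distribution of scores:

```python
import numpy as np

def iccs(image_emb: np.ndarray, caption_emb: np.ndarray) -> float:
    """Image-caption consistency score (ICCS): cosine similarity in the joint space."""
    return float(image_emb @ caption_emb
                 / (np.linalg.norm(image_emb) * np.linalg.norm(caption_emb)))

def inlierness(query_score: float, reference_scores: np.ndarray) -> float:
    """Fraction of reference ICCSs at or below the query score.
    A very low value flags the query package as semantically incoherent."""
    return float(np.mean(reference_scores <= query_score))

# Hypothetical usage, assuming embeddings from a pretrained joint model:
# ref_scores = np.array([iccs(i, c) for i, c in reference_pairs])
# is_tampered = inlierness(iccs(query_img, query_cap), ref_scores) < 0.05
```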
Zero-Shot Learning by Convex Combination of Semantic Embeddings
Several recent publications have proposed methods for mapping images into
continuous semantic embedding spaces. In some cases the embedding space is
trained jointly with the image transformation. In other cases the semantic
embedding space is established by an independent natural language processing
task, and then the image transformation into that space is learned in a second
stage. Proponents of these image embedding systems have stressed their
advantages over the traditional n-way classification framing of image
understanding, particularly in terms of the promise for zero-shot learning --
the ability to correctly annotate images of previously unseen object
categories. In this paper, we propose a simple method for constructing an image
embedding system from any existing n-way image classifier and a semantic word
embedding model, which contains the n class labels in its vocabulary. Our
method maps images into the semantic embedding space via convex combination of
the class label embedding vectors, and requires no additional training. We show
that this simple and direct method confers many of the advantages associated
with more complex image embedding schemes, and indeed outperforms state of the
art methods on the ImageNet zero-shot learning task.
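The construction can be written down in a few lines. The sketch below (Python/NumPy; the function and parameter names are ours, and T, the number of top classes combined, is a free choice) maps an image to the semantic space as a probability-weighted convex combination of seen-class label embeddings, then annotates it with the nearest unseen-class embedding:

```python
import numpy as np

def conse_embedding(class_probs: np.ndarray,
                    label_embeddings: np.ndarray,
                    top_t: int = 10) -> np.ndarray:
    """Embed an image as a convex combination of the word embeddings of its
    top-T predicted seen classes, weighted by renormalized classifier
    probabilities. No additional training is involved."""
    top = np.argsort(class_probs)[::-1][:top_t]
    weights = class_probs[top] / class_probs[top].sum()
    return weights @ label_embeddings[top]

def zero_shot_annotate(image_emb: np.ndarray,
                       unseen_label_embeddings: np.ndarray) -> int:
    """Annotate with the unseen class whose label embedding is most
    cosine-similar to the image's semantic embedding."""
    sims = unseen_label_embeddings @ image_emb
    sims /= (np.linalg.norm(unseen_label_embeddings, axis=1)
             * np.linalg.norm(image_emb))
    return int(np.argmax(sims))
```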
A Scale Independent Selection Process for 3D Object Recognition in Cluttered Scenes
In recent years, a wide range of algorithms
and devices has become available for easily acquiring range
images. The increasing abundance of depth data boosts
the need for reliable and unsupervised analysis techniques,
spanning from part registration to automated segmentation.
In this context, we focus on the recognition of known objects
in cluttered and incomplete 3D scans. Locating and fitting a
model to a scene are very important tasks in many scenarios
such as industrial inspection, scene understanding, medical
imaging and even gaming. For this reason, these problems
have been addressed extensively in the literature. Several
of the proposed methods adopt local descriptor-based
approaches, while a number of hurdles still hinder the use
of global techniques. In this paper we offer a different
perspective on the topic: We adopt an evolutionary selection
algorithm that seeks global agreement among surface points,
while operating at a local level. The approach effectively
extends the scope of local descriptors by actively selecting
correspondences that satisfy global consistency constraints,
allowing us to attack a more challenging scenario where
model and scene have different, unknown scales. This leads
to a novel and very effective pipeline for 3D object recognition,
which is validated with an extensive set of experiments.
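To convey the flavor of the selection process, the sketch below (Python; a deliberate simplification, not the paper's exact payoff) scores pairs of candidate correspondences by how well their model-to-scene distance ratios agree, since under a correct match with one unknown uniform scale all such ratios coincide, and then runs replicator dynamics so that only a mutually consistent subset survives:

```python
import numpy as np

def scale_ratio_payoff(model_pts, scene_pts, sigma=0.15):
    """Payoff between candidate correspondences model_pts[k] <-> scene_pts[k].
    Correct matches share one global model/scene distance ratio, so pairs
    whose log-ratio agrees with a robust (median) estimate score highly.
    (A simplification of the paper's payoff; assumes distinct points.)"""
    m = np.asarray(model_pts, float)
    s = np.asarray(scene_pts, float)
    dm = np.linalg.norm(m[:, None] - m[None], axis=2)
    ds = np.linalg.norm(s[:, None] - s[None], axis=2)
    np.fill_diagonal(dm, 1.0)
    np.fill_diagonal(ds, 1.0)          # avoid log(0) on the diagonal
    log_r = np.log(dm / ds)
    med = np.median(log_r[np.triu_indices(len(m), 1)])
    payoff = np.exp(-((log_r - med) ** 2) / sigma**2)
    np.fill_diagonal(payoff, 0.0)      # no self-support
    return payoff

def replicator_select(payoff, iters=200):
    """Evolutionary selection: correspondences that keep non-negligible
    population mass under replicator dynamics are kept as globally consistent."""
    n = len(payoff)
    x = np.full(n, 1.0 / n)
    for _ in range(iters):
        x *= payoff @ x
        x /= x.sum()
    return np.flatnonzero(x > 0.1 * x.max())   # heuristic support threshold
```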
Object Recognition Using Locality-Sensitive Hashing of Shape Contexts
At the core of many computer vision algorithms lies the task of finding a correspondence between image features local to a part of an image. Once these features are calculated, matching is commonly performed using a nearest-neighbor algorithm. In this chapter, we focus on the topic of object recognition and examine how the complexity of a basic feature-matching approach grows with the number of object classes. We use this as motivation for proposing approaches to feature-based object recognition that grow sublinearly with the number of object classes.

Regional Descriptor Approach. Our approach to object recognition relies on the matching of feature vectors (also referred to here as features) which characterize a region of a two-dimensional (2D) or 3D image, where by “3D image” we mean the point cloud resulting from a range scan. We use the term descriptor to refer to the method or “template” for calculating the feature vector.
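A generic version of the indexing step is easy to sketch. The following Python implements random-hyperplane LSH (one common hash family; the chapter's specific scheme and parameters are not assumed here) for approximate nearest-neighbor lookup over shape-context descriptors, so that query cost grows sublinearly with the size of the descriptor database:

```python
import numpy as np
from collections import defaultdict

class HyperplaneLSH:
    """Random-hyperplane LSH for approximate nearest-neighbor search over
    high-dimensional descriptors such as shape-context histograms."""

    def __init__(self, dim: int, n_bits: int = 16, n_tables: int = 8, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.planes = [rng.normal(size=(n_bits, dim)) for _ in range(n_tables)]
        self.tables = [defaultdict(list) for _ in range(n_tables)]
        self.vectors = None

    @staticmethod
    def _key(planes: np.ndarray, v: np.ndarray) -> tuple:
        # Hash = sign pattern of the descriptor against random hyperplanes.
        return tuple(bool(b) for b in planes @ v > 0)

    def index(self, vectors) -> None:
        self.vectors = np.asarray(vectors, dtype=float)
        for planes, table in zip(self.planes, self.tables):
            for i, v in enumerate(self.vectors):
                table[self._key(planes, v)].append(i)

    def query(self, v: np.ndarray):
        # Only descriptors colliding in at least one table are examined,
        # which is what makes the lookup sublinear in practice.
        candidates = set()
        for planes, table in zip(self.planes, self.tables):
            candidates.update(table.get(self._key(planes, v), []))
        if not candidates:
            return None
        cand = sorted(candidates)
        dists = np.linalg.norm(self.vectors[cand] - v, axis=1)
        return cand[int(np.argmin(dists))]
```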